Design Goals
SATAN was not built to solve any single problem; rather, it was built as
a research tool, to see what would happen if freely available state-of-the-art
software tools, combined with as much security knowledge as we could
pool together, were crammed into one (at least semi-)cohesive package.
Our design goals were:
- Discover whether mapping out the security of large networks
was a solvable problem.
- Use the traditional Unix toolbox approach of program design.
- Use as many currently useful, freely available software tools as
possible, to cut development time to a minimum.
- Design a security package that was educational as well as useful.
- Create a tool that was freely available to anyone who wanted to use it.
- Discover and uncover as much security and network information as
possible without being destructive.
- Create the best (and, at the time of its development, very nearly the
only) investigative network security tool available, at any price.
- Spur further program development (commercial or academic) in this very
rich area.
- Show just how insecure the Internet really is, and how much every
site depends on a large number of potentially insecure other sites.
With only two (very!) part-time programmers, it would have been
impossible to write all of the functionality necessary to make SATAN
work. We decided from the start to steal as many tools and
methodologies, and as much information, as possible (we have no shame!)
to create SATAN. In particular, perl and the HTML interface were vital
to the completion of the package. It would be wonderful to have a
mapping program to graphically display the results, but we haven't found
anything suitable so far.
Optimizing SATAN for speed of execution was not much of a design
consideration. It was designed to be an information gathering tool
that would be run periodically; a fairly large network (say, a
thousand nodes) can be scanned in several hours. In all likelihood, most
of the time spent using SATAN will go toward deciding what actions to
take based on the results it finds. In any case, the
network timeouts and uncertainties make real optimization very
difficult. Fortunately, perl was fast enough (thanks, Larry!) to make
performance a non-issue for most network queries and work.